In medical imaging, surface registration is widely used for performing systematic comparisons between anatomical structures, with a prime example being the highly convoluted brain cortical surfaces. To obtain a meaningful registration, a common approach is to identify prominent features on the surfaces and establish a low-distortion mapping between them, with the feature correspondences encoded as landmark constraints. Prior registration works have primarily focused on using manually labeled landmarks and solving highly nonlinear optimization problems, which are time-consuming and hence hinder practical applications. In this work, we propose a novel framework for the automatic landmark detection and registration of brain cortical surfaces using quasi-conformal geometry and convolutional neural networks. We first develop a landmark detection network (LD-Net) that allows the automatic extraction of landmark curves based on the surface geometry. We then utilize the detected landmarks and quasi-conformal theory to achieve the surface registration. Specifically, we develop a coefficient prediction network (CP-Net) for predicting the Beltrami coefficients associated with the desired landmark-constrained registration, and a mapping network called the disk Beltrami solver network (DBS-Net) for generating quasi-conformal mappings from the predicted Beltrami coefficients, with the bijectivity guaranteed by quasi-conformal theory. Experimental results are presented to demonstrate the effectiveness of our proposed framework. Altogether, our work paves a new way for surface-based morphometry and medical shape analysis.
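For readers less familiar with quasi-conformal theory, the bijectivity claim above rests on the standard Beltrami equation; the block below states this textbook relation as general background, not as a formula introduced by this paper.

```latex
% Beltrami equation: a map f on a planar (e.g., disk) domain is
% quasi-conformal with Beltrami coefficient \mu if
\frac{\partial f}{\partial \bar{z}} = \mu(z)\,\frac{\partial f}{\partial z},
\qquad \|\mu\|_{\infty} < 1 .
% By the measurable Riemann mapping theorem, any admissible \mu with
% \|\mu\|_{\infty} < 1 corresponds to a quasi-conformal homeomorphism,
% which is why predicting a valid Beltrami coefficient and then solving
% for the associated map yields a bijective registration.
```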
The parameterization of open and closed anatomical surfaces is of fundamental importance in many biomedical applications. Spherical harmonics, a set of basis functions defined on the unit sphere, are widely used for anatomical shape description. However, establishing a one-to-one correspondence between an object surface and the entire unit sphere may induce large geometric distortion if the shape of the surface is too different from a perfect sphere. In this work, we propose adaptive, area-preserving parameterization methods for simply-connected open and closed surfaces, with the surfaces parameterized onto spherical caps. Our methods optimize the shape of the parameter domain together with the mapping from the object surface to the parameter domain. The object surface is mapped globally onto an optimal spherical cap region of the unit sphere in an area-preserving manner, while also exhibiting low conformal distortion. We further develop a set of spherical-harmonics-like basis functions defined on the adaptive spherical cap domain, which we call the adaptive harmonics. Experimental results show that the proposed parameterization methods outperform existing methods for open and closed anatomical surfaces in terms of both area and angle distortion. Surface description of the object surfaces can be effectively achieved using the novel combination of the adaptive parameterization and the adaptive harmonics. Our work provides a novel way of mapping anatomical structures with improved accuracy and greater flexibility. More broadly, the idea of using an adaptive parameter domain makes it easy to handle a wide range of biomedical shapes.
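As background for the adaptive harmonics, the block below recalls the classical spherical-harmonic shape description on the unit sphere; the adaptive harmonics are understood as the analogous basis on the adaptive spherical cap domain (the precise construction is the paper's own and is not reproduced here).

```latex
% Classical spherical-harmonic description: after parameterizing a surface
% onto the unit sphere, each coordinate function is expanded as
f(\theta,\varphi) \;\approx\; \sum_{l=0}^{L}\sum_{m=-l}^{l} c_{lm}\, Y_l^m(\theta,\varphi),
% and the coefficients c_{lm} act as a compact shape descriptor. Large
% distortion in the sphere mapping corrupts the c_{lm}; the adaptive
% harmonics play the role of the Y_l^m on the spherical cap domain instead.
```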
IceCube, a cubic-kilometer array of optical sensors built to detect atmospheric and astrophysical neutrinos between 1 GeV and 1 PeV, is deployed 1.45 km to 2.45 km below the surface of the ice sheet at the South Pole. The classification and reconstruction of events from the in-ice detectors play a central role in IceCube data analysis. Reconstructing and classifying events is a challenge due to the detector geometry, the inhomogeneous scattering and absorption of light in the ice, and, below 100 GeV, the relatively low number of signal photons produced per event. To address this challenge, IceCube events can be represented as point-cloud graphs, with a graph neural network (GNN) serving as the classification and reconstruction method. The GNN is capable of distinguishing neutrino events from cosmic-ray backgrounds, classifying different neutrino event types, and reconstructing the deposited energy, direction, and interaction vertex. Based on simulation, we provide a comparison in the 1-100 GeV energy range to the state-of-the-art maximum likelihood techniques used in current IceCube analyses, including the effects of known systematic uncertainties. For neutrino event classification, the GNN increases the signal efficiency by 18% at a fixed false positive rate (FPR) compared to current IceCube methods. Alternatively, the GNN reduces the FPR by more than a factor of 8 (to below half a percent) at a fixed signal efficiency. For the reconstruction of energy, direction, and interaction vertex, the resolution improves on average by 13%-20% compared to the current maximum likelihood techniques. When run on a GPU, the GNN is capable of processing IceCube events at a rate close to the median IceCube trigger rate of 2.7 kHz, which opens the possibility of using low-energy neutrinos in online searches for transient events.
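To make the point-cloud-graph idea concrete, here is a minimal, self-contained sketch in plain PyTorch: each sensor hit becomes a node, a k-nearest-neighbour graph is built over the hit positions, and a small message-passing network produces per-event class logits. This is not the IceCube GNN architecture; the feature layout and layer sizes are assumptions for illustration.

```python
import torch
import torch.nn as nn


def knn_edges(pos: torch.Tensor, k: int) -> torch.Tensor:
    """Return [2, N*k] edge indices connecting each hit to its k nearest hits."""
    dist = torch.cdist(pos, pos)                           # [N, N] pairwise distances
    knn = dist.topk(k + 1, largest=False).indices[:, 1:]   # drop the self-neighbour
    src = torch.arange(pos.size(0)).repeat_interleave(k)
    return torch.stack([src, knn.reshape(-1)], dim=0)


class EventGNN(nn.Module):
    def __init__(self, in_dim=5, hidden=64, n_classes=3):
        super().__init__()
        self.msg = nn.Sequential(nn.Linear(2 * in_dim, hidden), nn.ReLU())
        self.cls = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(),
                                 nn.Linear(hidden, n_classes))

    def forward(self, feats: torch.Tensor, edges: torch.Tensor) -> torch.Tensor:
        src, dst = edges
        # message for each edge from the concatenated endpoint features
        m = self.msg(torch.cat([feats[dst], feats[src]], dim=-1))
        # mean-aggregate messages per destination node, then pool over the event
        agg = torch.zeros(feats.size(0), m.size(-1)).index_add_(0, dst, m)
        agg = agg / torch.bincount(dst, minlength=feats.size(0)).clamp(min=1).unsqueeze(-1)
        return self.cls(agg.mean(dim=0, keepdim=True))     # one logit vector per event


hits = torch.randn(120, 5)                  # 120 hits: x, y, z, time, charge (assumed layout)
logits = EventGNN()(hits, knn_edges(hits[:, :3], k=8))
```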
Data augmentation is an important component in the robustness evaluation of natural language processing (NLP) models and in enhancing the diversity of the data they are trained on. In this paper, we present NL-Augmenter, a new participatory Python-based natural language augmentation framework that supports the creation of both transformations (modifications to the data) and filters (data splits according to specific features). We describe the framework and an initial set of 117 transformations and 23 filters for a variety of natural language tasks. We demonstrate the efficacy of NL-Augmenter by using several of its transformations to analyze the robustness of popular natural language models. The infrastructure, data cards, and robustness analysis results are publicly available in the NL-Augmenter repository (\url{https://github.com/gem-benchmark/nl-augmenter}).
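To illustrate the two concepts, the sketch below shows a toy transformation and a toy filter written as plain Python classes. This is a self-contained analogue, not NL-Augmenter's actual interface; the class and method names are invented for illustration.

```python
import random
from typing import List


class SwapAdjacentWords:
    """Toy transformation: randomly swap one pair of adjacent words."""

    def __init__(self, seed: int = 0):
        self.rng = random.Random(seed)

    def generate(self, sentence: str) -> List[str]:
        words = sentence.split()
        if len(words) < 2:
            return [sentence]
        i = self.rng.randrange(len(words) - 1)
        words[i], words[i + 1] = words[i + 1], words[i]
        return [" ".join(words)]


class LongSentenceFilter:
    """Toy filter: keep only examples with more than `min_words` words."""

    def __init__(self, min_words: int = 10):
        self.min_words = min_words

    def filter(self, sentence: str) -> bool:
        return len(sentence.split()) > self.min_words


t, f = SwapAdjacentWords(), LongSentenceFilter(min_words=4)
print(t.generate("the quick brown fox jumps over the lazy dog"))
print(f.filter("too short"))
```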
Three main points: 1. Data Science (DS) will be increasingly important to heliophysics; 2. Methods of heliophysics science discovery will continually evolve, requiring the use of learning technologies [e.g., machine learning (ML)] that are applied rigorously and that are capable of supporting discovery; and 3. To grow with the pace of data, technology, and workforce changes, heliophysics requires a new approach to the representation of knowledge.
SchNetPack is a versatile neural-network toolbox that addresses both the requirements of method development and the application of atomistic machine learning. Version 2.0 comes with an improved data pipeline, modules for equivariant neural networks, as well as a PyTorch implementation of molecular dynamics. An optional integration with PyTorch Lightning and the Hydra configuration framework powers a flexible command-line interface. This makes SchNetPack 2.0 easily extendable with custom code and ready for complex training tasks such as the generation of 3D molecular structures.
We developed a simulator to quantify the effect of changes in environmental parameters on plant growth in precision farming. Our approach combines the processing of plant images with deep convolutional neural networks (CNN), growth curve modeling, and machine learning. As a result, our system is able to predict growth rates based on environmental variables, which opens the door for the development of versatile reinforcement learning agents.
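As a sketch of what the growth-curve-modeling step could look like, the example below fits a logistic curve to synthetic per-plant size measurements (such as CNN-estimated leaf area) and extracts a growth-rate parameter that could then be related to environmental variables. The model choice, quantities, and numbers are assumptions, not the paper's exact pipeline.

```python
import numpy as np
from scipy.optimize import curve_fit


def logistic(t, K, r, t0):
    """Logistic growth: carrying capacity K, growth rate r, midpoint t0."""
    return K / (1.0 + np.exp(-r * (t - t0)))


# synthetic observations: days since planting vs. estimated leaf area (cm^2)
days = np.arange(0, 30, 2, dtype=float)
area = logistic(days, K=220.0, r=0.35, t0=14.0)
area += np.random.default_rng(0).normal(scale=5.0, size=days.size)

(K_hat, r_hat, t0_hat), _ = curve_fit(logistic, days, area, p0=[200.0, 0.3, 12.0])
print(f"estimated growth rate r = {r_hat:.3f} per day")
```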
Synthetic data generation has recently gained widespread attention as a more reliable alternative to traditional data anonymization. The methods involved were originally developed for image synthesis. Hence, their application to the typically tabular and relational datasets from healthcare, finance and other industries is non-trivial. While substantial research has been devoted to the generation of realistic tabular datasets, the study of synthetic relational databases is still in its infancy. In this paper, we combine the variational autoencoder framework with graph neural networks to generate realistic synthetic relational databases. We then apply the obtained method to two publicly available databases in computational experiments. The results indicate that real databases' structures are accurately preserved in the resulting synthetic datasets, even for large datasets with advanced data types.
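The sketch below illustrates the kind of graph representation such an approach relies on: table rows become nodes and foreign-key links become edges, giving a structure that a graph neural network can operate on. The tables, columns, and use of networkx are illustrative assumptions, not the paper's implementation.

```python
import networkx as nx

# two toy tables linked by a foreign key (customer_id)
customers = [{"id": 1, "region": "EU"}, {"id": 2, "region": "US"}]
orders = [
    {"id": 10, "customer_id": 1, "amount": 25.0},
    {"id": 11, "customer_id": 1, "amount": 7.5},
    {"id": 12, "customer_id": 2, "amount": 40.0},
]

g = nx.Graph()
for row in customers:
    g.add_node(("customer", row["id"]), **row)           # one node per row
for row in orders:
    g.add_node(("order", row["id"]), **row)
    g.add_edge(("order", row["id"]), ("customer", row["customer_id"]))  # FK edge

print(g.number_of_nodes(), g.number_of_edges())           # 5 nodes, 3 edges
```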
Artificial Intelligence (AI) and its data-centric branch of machine learning (ML) have greatly evolved over the last few decades. However, as AI is used increasingly in real world use cases, the interpretability of and accessibility to AI systems have become major research areas. The lack of interpretability of ML based systems is a major hindrance to widespread adoption of these powerful algorithms. This is due to many reasons including ethical and regulatory concerns, which have resulted in poorer adoption of ML in some areas. The recent past has seen a surge in research on interpretable ML. Generally, designing a ML system requires good domain understanding combined with expert knowledge. New techniques are emerging to improve ML accessibility through automated model design. This paper provides a review of the work done to improve interpretability and accessibility of machine learning in the context of global problems while also being relevant to developing countries. We review work under multiple levels of interpretability including scientific and mathematical interpretation, statistical interpretation and partial semantic interpretation. This review includes applications in three areas, namely food processing, agriculture and health.
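As one concrete example of the statistical-interpretation level mentioned here, the snippet below computes permutation feature importance with scikit-learn on synthetic data: shuffling a feature and measuring the score drop gives a model-agnostic explanation of which inputs matter. This is a generic illustration, not a method introduced by the review.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# synthetic classification data and a simple model to explain
X, y = make_classification(n_samples=500, n_features=6, n_informative=3,
                           random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# importance = average score drop when a feature is randomly permuted
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, imp in enumerate(result.importances_mean):
    print(f"feature {i}: importance {imp:.3f}")
```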
Microvessels are a characteristic of plaque vulnerability; to analyze them, we developed an automated deep learning method for detecting microvessels in intravascular optical coherence tomography (IVOCT) images. A total of 8,403 IVOCT image frames from 85 lesions and 37 normal segments were analyzed. Manual annotation was done using dedicated software (OCTOPUS) previously developed by our group. Data augmentation in the polar (r, θ) domain was applied to raw IVOCT images to ensure that microvessels appear at all possible angles. Pre-processing methods included guidewire/shadow detection, lumen segmentation, pixel shifting, and noise reduction. DeepLab v3+ was used to segment microvessel candidates. A bounding box on each candidate was classified as either microvessel or non-microvessel using a shallow convolutional neural network. For better classification, we used data augmentation (i.e., angle rotation) on bounding boxes with a microvessel during network training. Data augmentation and pre-processing steps improved microvessel segmentation performance significantly, yielding a method with Dice of 0.71+/-0.10 and pixel-wise sensitivity/specificity of 87.7+/-6.6%/99.8+/-0.1%. The network for classifying microvessels from candidates performed exceptionally well, with sensitivity of 99.5+/-0.3%, specificity of 98.8+/-1.0%, and accuracy of 99.1+/-0.5%. The classification step eliminated the majority of residual false positives, and the Dice coefficient increased from 0.71 to 0.73. In addition, our method produced 698 image frames with microvessels present, compared to 730 from manual analysis, representing a 4.4% difference. When compared to the manual method, the automated method improved microvessel continuity, implying improved segmentation performance. The method will be useful for research purposes as well as potential future treatment planning.
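As a sketch of the polar-domain augmentation idea, the snippet below circularly shifts a synthetic (θ, r) frame along the θ axis, which corresponds to rotating the catheter image so that microvessels are seen at all possible angles. The array layout and frame size are assumptions, not the paper's exact settings.

```python
import numpy as np


def rotate_polar(frame: np.ndarray, shift_alines: int) -> np.ndarray:
    """Circularly shift a polar (theta x r) IVOCT frame by `shift_alines` A-lines."""
    return np.roll(frame, shift=shift_alines, axis=0)


frame = np.random.rand(504, 976).astype(np.float32)               # illustrative frame size
augmented = [rotate_polar(frame, s) for s in (0, 126, 252, 378)]  # 0/90/180/270 degrees
```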